Incremental Sequence Learning
Abstract
As linguistic competence so clearly illustrates, processing sequences of events is a fundamental aspect of human cognition. For this reason perhaps, sequence learning behavior currently attracts considerable attention in both cognitive psychology and computational theory. In typical sequence learning situations, participants are asked to react to each element of sequentially structured visual sequences of events. An important issue in this context is to determine whether essentially associative processes are sufficient to understand human performance, or whether more powerful learning mechanisms are necessary. To address this issue, we explore how well human participants and connectionist models are capable of learning sequential material that involves complex, disjoint, long-distance contingencies. We show that the popular Simple Recurrent Network model (Elman, 1990), which has otherwise been shown to account for a variety of empirical findings (Cleeremans, 1993), fails to account for human performance in several experimental situations meant to test the model’s specific predictions. In previous research (Cleeremans, 1993), briefly described in this paper, the structure of center-embedded sequences was manipulated to be strictly identical or probabilistically different as a function of the elements surrounding the embedding. While the SRN could only learn in the second case, human subjects were found to be insensitive to the manipulation. In the new experiment described in this paper, we tested the idea that performance benefits from “starting small” effects (Elman, 1993) by contrasting two conditions in which the training regimen was either incremental or not. Again, while the SRN is only capable of learning in the first case, human subjects were able to learn in both. We suggest an alternative model based on Maskara & Noetzel’s (1991) Auto-Associative Recurrent Network as a way to overcome the SRN model’s failure to account for the empirical findings.
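To make the architecture at issue concrete, the sketch below implements a minimal Elman-style Simple Recurrent Network in Python/NumPy, trained to predict the next element of a symbol sequence. The network sizes, the toy sequences, and the two-stage "incremental" schedule at the end are hypothetical placeholders chosen for illustration; they are not the stimulus material or parameters used in the experiments reported here.

import numpy as np

class SRN:
    # Minimal Elman-style Simple Recurrent Network: the context layer is a
    # copy of the previous hidden state, and errors are backpropagated
    # through a single time step only, as in Elman (1990).
    def __init__(self, n_symbols, n_hidden, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.n_symbols, self.n_hidden, self.lr = n_symbols, n_hidden, lr
        self.W_xh = rng.normal(0, 0.1, (n_hidden, n_symbols))  # input -> hidden
        self.W_ch = rng.normal(0, 0.1, (n_hidden, n_hidden))   # context -> hidden
        self.W_hy = rng.normal(0, 0.1, (n_symbols, n_hidden))  # hidden -> output

    def train_sequence(self, seq):
        # One pass over a sequence, predicting each element from the previous one.
        context = np.zeros(self.n_hidden)
        for cur, nxt in zip(seq[:-1], seq[1:]):
            x = np.eye(self.n_symbols)[cur]                    # one-hot input
            h = np.tanh(self.W_xh @ x + self.W_ch @ context)   # hidden state
            z = self.W_hy @ h
            y = np.exp(z - z.max())
            y /= y.sum()                                       # softmax prediction of the next element
            dy = y - np.eye(self.n_symbols)[nxt]               # cross-entropy gradient at the output
            dh = (self.W_hy.T @ dy) * (1.0 - h ** 2)           # backprop through tanh
            self.W_hy -= self.lr * np.outer(dy, h)
            self.W_xh -= self.lr * np.outer(dh, x)
            self.W_ch -= self.lr * np.outer(dh, context)
            context = h                                        # copy hidden state into the context layer

# Hypothetical two-condition training regimen on toy sequences.
srn = SRN(n_symbols=6, n_hidden=20)
simple = [0, 1, 2, 3]                  # short, local contingency
embedded = [0, 4, 5, 4, 5, 3]          # long-distance contingency spanning an embedding
for epoch in range(200):
    # Incremental regimen: simple material first, embedded material later.
    # A non-incremental regimen would train on the embedded sequence throughout.
    srn.train_sequence(simple if epoch < 100 else embedded)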
Similar resources
Incremental Sequence Learning
Deep learning research over the past years has shown that by increasing the scope or difficulty of the learning problem over time, increasingly complex learning problems can be addressed. We study incremental learning in the context of sequence learning, using generative RNNs in the form of multi-layer recurrent Mixture Density Networks. While the potential of incremental or curriculum learning...
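For a concrete picture of what such an incremental (curriculum) regimen can look like, the fragment below sketches one common variant in which difficulty is equated with sequence length. The schedule, the helper name incremental_training, and the reuse of the SRN class from the sketch above are illustrative assumptions, not the setup of the cited work (which uses multi-layer recurrent Mixture Density Networks).

def incremental_training(model, sequences, epochs_per_stage=50):
    # Train on progressively longer prefixes, growing the scope of the
    # learning problem over time (one simple way to realise a curriculum).
    max_len = max(len(s) for s in sequences)
    for stage_len in range(2, max_len + 1):
        for _ in range(epochs_per_stage):
            for seq in sequences:
                model.train_sequence(seq[:stage_len])   # current difficulty level

# Example with the SRN sketched above and a hypothetical toy sequence:
incremental_training(SRN(n_symbols=6, n_hidden=20), [[0, 4, 5, 4, 5, 3]])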
A Hybrid Framework for Building an Efficient Incremental Intrusion Detection System
In this paper, a boosting-based incremental hybrid intrusion detection system is introduced. This system combines incremental misuse detection and incremental anomaly detection. We use a boosting ensemble of weak classifiers to implement the misuse intrusion detection system. For incremental misuse detection, it can identify new classes of intrusions that do not exist in the training dataset. As...
Incremental Learning in Inductive Programming
Inductive programming systems characteristically exhibit an exponential explosion in search time as one increases the size of the programs to be generated. As a way of overcoming this, we introduce incremental learning, a process in which an inductive programming system automatically modifies its inductive bias towards some domain through solving a sequence of gradually more difficult problems ...
Incremental Discretization for Naïve-Bayes Classifier
Naïve-Bayes classifiers (NB) support incremental learning. However, the lack of effective incremental discretization methods has been hindering NB’s incremental learning in the face of quantitative data. This problem is further compounded by the fact that quantitative data are everywhere, from temperature readings to share prices. In this paper, we present a novel incremental discretization method ...
Weighing Hypotheses: Incremental Learning from Noisy Data
Incremental learning from noisy data presents dual challenges: that of evaluating multiple hypotheses incrementally and that of distinguishing errors due to noise from errors due to faulty hypotheses. This problem is critical in such areas of machine learning as concept learning, inductive programming, and sequence prediction. I develop a general, quantitative method for weighing the merits of ...
Incremental Learning from Positive Data
The present paper deals with a systematic study of incremental learning algorithms. The general scenario is as follows. Let c be any concept; then every infinite sequence of elements exhausting c is called a positive presentation of c. An algorithmic learner successively takes as input one element of a positive presentation as well as its previously made hypothesis at a time, and outputs a new hy...
Publication date: 1998